Exam Content
CKS exam link
Important Instructions: CKS
- The exam consists of 15-20 performance-based tasks.
- The January 2023 sitting had 16 questions in practice.
- Candidates have 2 hours to complete the CKS exam.
- Since the environment "upgrade" in June 2022, the exam environment has become harder to use and noticeably laggy, so time is tight and it is easy not to finish. Do the questions you are confident about and that take little time first.
- A score of 67 or above passes the CKS exam; if you fail, you get one free retake.
Certifications expire 36 months from the date that the program certification requirements are met by a candidate.
Certified Kubernetes Security Specialist (CKS)
The following tools and resources are allowed during the exam as long as they are used by candidates to work independently on exam tasks (i.e. not used for 3rd party assistance or research) and are accessed from within the Linux server terminal on which the Exam is delivered.
During the exam, candidates may:
- review the Exam content instructions that are presented in the command line terminal.
- review Documents installed by the distribution (i.e. /usr/share and its subdirectories)
- use the search function provided on https://kubernetes.io/docs/; however, they may only open search results that have a domain matching the sites listed below
- use the browser within the VM to access the following documentation:
- Kubernetes Documentation:
- https://kubernetes.io/docs/ and their subdomains
- https://kubernetes.io/blog/ and their subdomains
This includes all available language translations of these pages (e.g. https://kubernetes.io/zh/docs/)
- Tools:
- Trivy documentation https://aquasecurity.github.io/trivy/
- Falco documentation https://falco.org/docs/
This includes all available language translations of these pages (e.g. https://falco.org/zh/docs/)
- AppArmor:
- Documentation: https://gitlab.com/apparmor/apparmor/-/wikis/Documentation
You’re only allowed to have one other browser tab open with:
- https://kubernetes.io/docs
- https://github.com/kubernetes
- https://kubernetes.io/blog
- https://github.com/aquasecurity/trivy
- https://falco.org/docs
- https://gitlab.com/apparmor/apparmor/-/wikis/Documentation
CKS Environment
- Each task on this exam must be completed on a designated cluster/configuration context.
- Sixteen clusters comprise the exam environment, one for each task. Each cluster is made up of one master node and one worker node.
- An infobox at the start of each task provides you with the cluster name/context and the hostname of the master and worker node.
- You can switch the cluster/configuration context using a command such as the following:
kubectl config use-context <cluster/context name>
- Nodes making up each cluster can be reached via ssh, using a command such as the following:
ssh <nodename>
- You have elevated privileges on any node by default, so there is no need to assume elevated privileges.
- You must return to the base node (hostname cli) after completing each task.
- Nested ssh is not supported.
- You can use kubectl and the appropriate context to work on any cluster from the base node. When connected to a cluster member via ssh, you will only be able to work on that particular cluster via kubectl.
- For your convenience, all environments, in other words, the base system and the cluster nodes, have the following additional command-line tools pre-installed and pre-configured:
  - kubectl with k alias and Bash autocompletion
  - yq and jq for YAML/JSON processing
  - tmux for terminal multiplexing
  - curl and wget for testing web services
  - man and man pages for further documentation
- Further instructions for connecting to cluster nodes will be provided in the appropriate tasks
- The CKS environment is currently running etcd v3.5
- The CKS environment is currently running Kubernetes v1.26
- The CKS exam environment will be aligned with the most recent K8s minor version within approximately 4 to 8 weeks of the K8s release date.
Additional topics in CKS compared to CKA and CKAD
- Pod Security Policies (PSP) - removed from Kubernetes in v1.25
- AppArmor
- Apiserver
- Apiserver Crash
- Apiserver NodeRestriction
- ImagePolicyWebhook
- kube-bench
- Trivy
CKS knowledge points & practice exercises summary
How to prepare
Kubernetes practice environment (same as CKA)
CKS practice exercises
There are several practice question banks; work through every exercise yourself, hands-on. Actually doing them is essential.
- CKS online practice environment: https://killercoda.com/killer-shell-cks
- CKS Simulator Kubernetes 1.26
- Admission plugin / Pod Security Policies practice
CKS courses
- [Paid] The official course corresponding to the CKS exam, Kubernetes Security Essentials (LFS260). Probably not worth buying; the official documentation is enough.
Common commands
```bash
# less familiar commands
```
```bash
# frequently used commands
```
```bash
# CKS-specific commands
```
Lessons learned (same as CKA)
Pre Setup (same as CKA)
CKS 2023 Real Exam Questions (Kubernetes 1.26)
Question 1 - AppArmor access control
Context
AppArmor is enabled on the cluster’s worker node. An AppArmor profile is prepared, but not enforced yet.
You may use your browser to open one additional tab to access the AppArmor documentation.
AppArmor 已在 cluster 的工作节点上被启用。一个 AppArmor 配置文件已存在,但尚未被实施。
Task
On the cluster's worker node, enforce the prepared AppArmor profile located at /etc/apparmor.d/nginx_apparmor.
Edit the prepared manifest file located at /home/candidate/KSSH00401/nginx-deploy.yaml to apply the AppArmor profile.
Finally, apply the manifest file and create the Pod specified in it.
在 cluster 的工作节点上,实施位于 /etc/apparmor.d/nginx_apparmor
的现有 AppArmor 配置文件。
编辑位于 /home/candidate/KSSH00401/nginx-deploy.yaml
的现有清单文件以应用 AppArmor 配置文件。
最后,应用清单文件并创建其中指定的 Pod 。
Solution
Search the docs for apparmor ("Restrict a Container's Access to Resources with AppArmor"), then search the page for the string "parser".
https://kubernetes.io/zh/docs/tutorials/security/apparmor/
```bash
### ssh to the designated worker node
```
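Only the first line of that listing survives above, so here is a minimal sketch of the usual workflow. The profile name is whatever is declared inside /etc/apparmor.d/nginx_apparmor (shown as <profile-name> below), and the container name must match the one in nginx-deploy.yaml; both are placeholders.

```bash
# On the worker node: load the prepared profile in enforce mode and verify it
ssh <worker-node>
sudo apparmor_parser -q /etc/apparmor.d/nginx_apparmor
sudo aa-status | grep <profile-name>   # the profile should be listed as enforced
exit

# Back on the base node: reference the profile from the Pod template via the AppArmor annotation
# (under metadata.annotations of the Pod template):
#   container.apparmor.security.beta.kubernetes.io/<container-name>: localhost/<profile-name>
vim /home/candidate/KSSH00401/nginx-deploy.yaml
kubectl apply -f /home/candidate/KSSH00401/nginx-deploy.yaml
kubectl get pods        # the new Pod should be Running
```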
Question 2 - kube-bench CIS benchmark
Context
A CIS Benchmark tool was run against the kubeadm-created cluster and found multiple issues that must be addressed immediately.
针对 kubeadm 创建的 cluster 运行 CIS 基准测试工具时, 发现了多个必须立即解决的问题。
Task
Fix all issues via configuration and restart the affected components to ensure the new settings take effect.
通过配置修复所有问题并重新启动受影响的组件以确保新的设置生效。
Fix all of the following violations that were found against the API server:
修复针对 API 服务器发现的所有以下违规行为:
1.2.7 (FAIL): Ensure that the --authorization-mode argument is not set to AlwaysAllow
1.2.8 (FAIL): Ensure that the --authorization-mode argument includes Node
1.2.9 (FAIL): Ensure that the --authorization-mode argument includes RBAC
1.2.18 (FAIL): Ensure that the --insecure-bind-address argument is not set
1.2.19 (FAIL): Ensure that the --insecure-port argument is set to 0
Fix all of the following violations that were found against the kubelet:
修复针对 kubelet 发现的所有以下违规行为:
4.2.1 (FAIL): Ensure that the --anonymous-auth argument is set to false
4.2.2 (FAIL): Ensure that the --authorization-mode argument is not set to AlwaysAllow
Note: use Webhook authn/authz where possible.
Fix all of the following violations that were found against etcd:
修复针对 etcd 发现的所有以下违规行为:
2.2 (FAIL): Ensure that the --client-cert-auth argument is set to true
Solution
```bash
$ ssh root@vms65.rhce.cc
```
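The full command listing is truncated above; a condensed sketch of the usual fixes follows, using the default kubeadm file locations. The flag values come directly from the kube-bench findings listed in the task.

```bash
ssh <master-node>

# kube-apiserver (static Pod manifest; the Pod restarts automatically after saving)
vim /etc/kubernetes/manifests/kube-apiserver.yaml
#   --authorization-mode=Node,RBAC      # not AlwaysAllow; includes Node and RBAC
#   remove any --insecure-bind-address=... line
#   --insecure-port=0                   # only if this legacy flag is still present

# etcd (static Pod manifest)
vim /etc/kubernetes/manifests/etcd.yaml
#   --client-cert-auth=true

# kubelet (repeat on every affected node)
vim /var/lib/kubelet/config.yaml
#   authentication.anonymous.enabled: false
#   authorization.mode: Webhook
systemctl daemon-reload && systemctl restart kubelet

kubectl get nodes   # confirm the cluster is healthy again
```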
Question 3 - Trivy image scanning
Task
使用 Trivy 开源容器扫描器检测 namespace kamino 中 Pod 使用的具有严重漏洞的镜像。
查找具有 High 或 Critical 严重性漏洞的镜像,并删除使用这些镜像的 Pod。
注意:Trivy 仅安装在 cluster 的 master 节点上,在工作节点上不可使用。你必须切换到 cluster 的 master 节点才能使用 Trivy 。
Use the Trivy open-source container scanner to detect images with severe vulnerabilities used by Pods in the namespace kamino.
Look for images with High or Critical severity vulnerabilities, and delete the Pods that use those images.
Trivy is pre-installed on the cluster's master node only; it is not available on the base system or the worker nodes. You'll have to connect to the cluster's master node to use Trivy.
Solution
Search the docs for kubectl images ("List All Container Images Running in a Cluster").
https://kubernetes.io/zh-cn/docs/tasks/access-application-cluster/list-all-running-container-images/
```bash
### must be run on the control-plane (master) node
```
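The listing is truncated above; one common approach is sketched below. The namespace and severity levels come from the task; the loop and pod names are generic placeholders.

```bash
ssh <master-node>

# list every Pod in kamino together with the images it runs
kubectl -n kamino get pods -o custom-columns='POD:.metadata.name,IMAGES:.spec.containers[*].image'

# scan each unique image; any HIGH/CRITICAL finding marks the Pod for deletion
for img in $(kubectl -n kamino get pods -o jsonpath='{.items[*].spec.containers[*].image}' | tr ' ' '\n' | sort -u); do
  echo "== $img =="
  trivy image --severity HIGH,CRITICAL "$img" | grep -E 'Total: [1-9]' && echo "DELETE pods using $img"
done

# delete the offending Pods (delete their Deployment/ReplicaSet instead if they are managed)
kubectl -n kamino delete pod <pod-name>
```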
Question 4 - Sysdig & Falco
You may use your browser to open one additional tab to access Sysdig's documentation or Falco's documentation.
Task
Use runtime detection tools to detect anomalous processes spawning and executing frequently in the single container belonging to Pod redis. Two tools are available to use:
使用运行时检测工具来检测 Pod tomcat 单个容器中频发生成和执行的异常进程。有两种工具可供使用:
- sysdig
- falco
The tools are pre-installed on the cluster's worker node only; they are not available on the base system or the master node.
Using the tool of your choice (including any non-pre-installed tool), analyse the container's behavior for at least 30 seconds, using filters that detect newly spawning and executing processes. Store an incident file at /opt/KSR00101/incidents/summary, containing the detected incidents, one per line, in the following format:
注:这些工具只预装在 cluster 的工作节点,不在 master 节点。
使用工具至少分析 30 秒,使用过滤器检查生成和执行的进程,将事件写到 /opt/KSR00101/incidents/summary
文件中,其中包含检测的事件, 每个单独一行
格式如下:
```
[timestamp],[uid],[processName]
```
Keep the tool's original timestamp format unchanged.
Note: make sure the incident file is stored on the cluster's worker node.
```bash
# Method 1: sysdig
```
```bash
# Method 2: falco
```
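Both listings are truncated above; a minimal sketch of the sysdig approach (method 1) is shown below. The container id lookup and the 30-second window follow the task; the output fields match the requested [timestamp],[uid],[processName] format, and <container-id> is a placeholder.

```bash
ssh <worker-node>

# find the id of the target Pod's single container
crictl ps | grep redis

# capture for 30 seconds and print time, uid and process name for executed processes
sudo sysdig -M 30 \
  -p '%evt.time,%user.uid,%proc.name' \
  container.id=<container-id> and evt.type=execve \
  > /opt/KSR00101/incidents/summary

head /opt/KSR00101/incidents/summary   # verify; the file must stay on the worker node
```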
Question 5 - ServiceAccount
Context
A Pod fails to run because of an incorrectly specified ServiceAccount.
Task
Create a new ServiceAccount named backend-sa in the existing namespace qa, which must not have access to any secrets.
- Inspect the Pods in the namespace qa.
- Edit the Pod to use the newly created ServiceAccount backend-sa. Ensure that the modified specification is applied and the Pod is running.
- Finally, clean up and delete the now unused ServiceAccount in the namespace qa.
Variant: in the existing namespace qa, create a new ServiceAccount named backend-sa and make sure this ServiceAccount does not automount secrets. Use the manifest file at /cks/9/pod9.yaml to create a Pod. Finally, clean up any unused ServiceAccounts in the namespace qa.
Solution
Search the docs for serviceaccount ("Configure Service Accounts for Pods"), then search the page for the string "automount".
https://kubernetes.io/zh-cn/docs/tasks/configure-pod-container/configure-service-account/
```bash
### create the ServiceAccount
```
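The listing is truncated above; a sketch of the usual steps. The automountServiceAccountToken field is what implements the "no access to secrets" requirement; <pod-name> and <unused-sa> are placeholders.

```bash
# 1) ServiceAccount that does not automount its token
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: backend-sa
  namespace: qa
automountServiceAccountToken: false
EOF

# 2) point the Pod at the new ServiceAccount and recreate it
kubectl -n qa get pods
kubectl -n qa get pod <pod-name> -o yaml > /tmp/pod.yaml
#   set spec.serviceAccountName: backend-sa in /tmp/pod.yaml
kubectl -n qa delete pod <pod-name>
kubectl apply -f /tmp/pod.yaml
kubectl -n qa get pods          # must be Running

# 3) delete the ServiceAccount that is no longer used
kubectl -n qa get sa
kubectl -n qa delete sa <unused-sa>
```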
Question 6 (2022 exam, v1.20) - Pod Security Policy (PodSecurityPolicy)
This question no longer appears on the 2023 exam; it has been replaced by the Pod Security Standards question below.
Context
A PodSecurityPolicy shall prevent the creation of privileged Pods in a specific namespace.
PodSecurityPolicy 应防止在特定 namespace 中特权 Pod 的创建。
Task
Create a new PodSecurityPolicy named restrict-policy, which prevents the creation of privileged Pods.
Create a new ClusterRole named restrict-access-role, which uses the newly created PodSecurityPolicy restrict-policy.
Create a new ServiceAccount named psp-denial-sa in the existing namespace staging.
Finally, create a new ClusterRoleBinding named dany-access-bind, which binds the newly created ClusterRole restrict-access-role to the newly created ServiceAccount psp-denial-sa.
创建一个名为 restrict-policy
的新的 PodSecurityPolicy,以防止特权 Pod 的创建。
创建一个名为 restrict-access-role
并使用新创建的 PodSecurityPolicy restrict-policy
的 ClusterRole。
在现有的 namespace staging
中创建一个名为 psp-denial-sa
的新 ServiceAccount 。
最后,创建一个名为 dany-access-bind
的 ClusterRoleBinding,将新创建的 ClusterRole restrict-access-role
绑定到新创建的 ServiceAccount psp-denial-sa
。
You can find a template manifest file at: /cks/psp/psp.yaml
Solution
Search the docs for runasany (Pod Security Policies).
https://kubernetes.io/id/docs/concepts/policy/pod-security-policy/
Search the docs for clusterrole (Using RBAC Authorization).
https://kubernetes.io/zh-cn/docs/reference/access-authn-authz/rbac/
```bash
### (1) create the PSP
```
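The listing is truncated above; a condensed sketch follows (only applicable up to Kubernetes v1.24, since PSP was removed in v1.25). The required rule fields besides privileged: false are set to RunAsAny just to make the object valid.

```yaml
# /cks/psp/psp.yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restrict-policy
spec:
  privileged: false        # prevents privileged Pods
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
```
```bash
kubectl apply -f /cks/psp/psp.yaml
kubectl create clusterrole restrict-access-role \
  --verb=use --resource=podsecuritypolicies --resource-name=restrict-policy
kubectl -n staging create serviceaccount psp-denial-sa
kubectl create clusterrolebinding dany-access-bind \
  --clusterrole=restrict-access-role --serviceaccount=staging:psp-denial-sa
```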
Question 6 (2023 exam, v1.26) - Pod Security Standards
Task weight: 8%
Use context: kubectl config use-context workload-prod
There is a Deployment container-host-hacker in Namespace team-red which mounts /run/containerd as a hostPath volume on the Node where it's running. This means that the Pod can access various data about other containers running on the same Node.
To prevent this, configure Namespace team-red to enforce the baseline Pod Security Standard. Once completed, delete the Pod of the Deployment mentioned above.
Check the ReplicaSet events and write the event/log lines containing the reason why the Pod isn't recreated into /opt/course/4/logs.
Answer
Making Namespaces use Pod Security Standards works via labels. We can simply edit it:
```bash
k edit ns team-red
```
Now we configure the requested label:
```yaml
# kubectl edit namespace team-red
```
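Only the first line of that edit survives above; the relevant part is the enforce label on the Namespace, roughly as sketched here (the kubernetes.io/metadata.name label is normally already present):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-red
  labels:
    kubernetes.io/metadata.name: team-red
    pod-security.kubernetes.io/enforce: baseline   # enforce the baseline Pod Security Standard
```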
This should already be enough for the default Pod Security Admission controller to pick up on that change. Let's test it and delete the Pod to see whether it will be recreated or fail; it should fail!
```bash
➜ k -n team-red get pod
```
Usually the ReplicaSet of a Deployment would recreate the Pod if deleted, here we see this doesn’t happen. Let’s check why:
```bash
➜ k -n team-red get rs
```
There we go! Finally we write the reason into the requested file so that Mr Scoring will be happy too!
```bash
# /opt/course/4/logs
```
Pod Security Standards can give a great base level of security! But when one wants to adjust levels like baseline or restricted in a more fine-grained way, this isn't possible, and 3rd-party solutions like OPA could be looked at.
Question 7 - NetworkPolicy (default deny)
Context
A default-deny NetworkPolicy prevents accidentally exposing a Pod in a namespace that has no other NetworkPolicy defined.
一个默认拒绝(default-deny)的 NetworkPolicy 可避免在未定义任何其他 NetworkPolicy 的 namespace 中意外公开 Pod。
Task
Create a new default-deny NetworkPolicy named denynetwork in the namespace development for all traffic of type Ingress.
The new NetworkPolicy must deny all ingress traffic in the namespace development.
Apply the newly created default-deny NetworkPolicy to all Pods running in namespace development.
You can find a skeleton manifest file at /cks/15/p1.yaml
Variant: create a new default-deny NetworkPolicy named denynetwork in namespace testing for all traffic of type Ingress + Egress. The new NetworkPolicy must deny all Ingress + Egress traffic in namespace testing, and must apply to all Pods running in namespace testing.
You can find a template manifest file at /cks/15/p1.yaml.
```bash
### edit the template manifest file
```
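The listing is truncated above; a sketch following the English wording (Ingress only, namespace development). For the Ingress + Egress variant, also list Egress under policyTypes.

```yaml
# /cks/15/p1.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: denynetwork
  namespace: development
spec:
  podSelector: {}      # selects all Pods in the namespace
  policyTypes:
  - Ingress            # no ingress rules defined, so all ingress traffic is denied
```
```bash
kubectl apply -f /cks/15/p1.yaml
kubectl -n development describe networkpolicy denynetwork
```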
Question 8 - NetworkPolicy
Task
Create a NetworkPolicy named pod-restriction to restrict access to Pod products-service running in namespace dev-team.
Only allow the following Pods to connect to Pod products-service:
- Pods in the namespace qa
- Pods with label environment: testing, in any namespace
Make sure to apply the NetworkPolicy.
You can find a skeleton manifest file at /cks/6/p1.yaml
创建一个名为 pod-restriction
的 NetworkPolicy 来限制对在 namespace dev-team
中运行的 Pod products-service
的访问。只允许以下 Pod 连接到 Pod products-service
:
- namespace
qa
中的 Pod - 位于任何 namespace,带有标签
environment: testing
的 Pod
注意:确保应用 NetworkPolicy。
你可以在/cks/net/po.yaml
找到一个模板清单文件。
Solution
Search the docs for networkpolicy (Network Policies).
https://kubernetes.io/zh-cn/docs/concepts/services-networking/network-policies/
```bash
### (1) check the labels on namespace qa
```
```yaml
apiVersion: networking.k8s.io/v1
```
```bash
### apply the manifest file
```
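The YAML above is truncated; a sketch of the full policy follows. The kubernetes.io/metadata.name label is the standard namespace label; check the real labels on namespace qa and on the products-service Pod first, since the run: products-service selector is only a placeholder.

```bash
kubectl get ns qa --show-labels
kubectl -n dev-team get pod products-service --show-labels
```
```yaml
# /cks/6/p1.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: pod-restriction
  namespace: dev-team
spec:
  podSelector:
    matchLabels:
      run: products-service          # placeholder: use the Pod's real labels
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:             # rule 1: any Pod in namespace qa
        matchLabels:
          kubernetes.io/metadata.name: qa
    - namespaceSelector: {}          # rule 2: Pods labelled environment=testing in any namespace
      podSelector:
        matchLabels:
          environment: testing
```
```bash
kubectl apply -f /cks/6/p1.yaml
```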
Question 9 - RBAC
Context
绑定到 Pod 的 ServiceAccount 的 Role 授予过度宽松的权限。完成以下项目以减少权限集。
A Role bound to a Pod’s serviceAccount grants overly permissive permissions.
Complete the following tasks to reduce the set of permissions.
Task
An existing Pod named web-pod is already running in namespace db. Edit the existing Role bound to the Pod's ServiceAccount service-account-web so that it only allows the get operation, and only on resources of type services.
- In the namespace db, create a new Role named role-2 that only allows the delete operation, and only on resources of type namespaces.
- Create a new RoleBinding named role-2-binding that binds the newly created Role to the Pod's ServiceAccount.
Note: do not delete the existing RoleBinding.
Given an existing Pod named web-pod running in the namespace db, edit the existing Role bound to the Pod's ServiceAccount sa-dev-1 to only allow performing list operations, only on resources of type Endpoints.
- Create a new Role named role-2 in the namespace db, which only allows performing update operations, only on resources of type persistentvolumeclaims.
- Create a new RoleBinding named role-2-binding binding the newly created Role to the Pod's ServiceAccount.
Don't delete the existing RoleBinding.
Solution
Search the docs for clusterrole (Using RBAC Authorization).
https://kubernetes.io/zh-cn/docs/reference/access-authn-authz/rbac/
```bash
### find the Role bound to the ServiceAccount (assume it is role-1)
```
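The listing is truncated above; a sketch using the names from the first version of the task (web-pod / service-account-web, get on services, delete on namespaces). The existing Role name role-1 is only an assumption, as the comment above notes.

```bash
# find the RoleBinding that references the ServiceAccount (assume it points at role-1)
kubectl -n db get rolebindings -o wide
kubectl -n db describe rolebinding

# 1) restrict the existing Role to "get" on "services" only
kubectl -n db edit role role-1
#   rules:
#   - apiGroups: [""]
#     resources: ["services"]
#     verbs: ["get"]

# 2) new Role allowing only "delete" on "namespaces"
kubectl -n db create role role-2 --verb=delete --resource=namespaces

# 3) bind it to the Pod's ServiceAccount
kubectl -n db create rolebinding role-2-binding \
  --role=role-2 --serviceaccount=db:service-account-web

# verify
kubectl auth can-i delete namespaces \
  --as=system:serviceaccount:db:service-account-web -n db
```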
Question 10 - kube-apiserver audit logging and collection
Task
Enable audit logs in the cluster.
To do so, enable the log backend, and ensure that:
- logs are stored at /var/log/kubernetes/audit-logs.txt
- log files are retained for 10 days
- at maximum, a number of 2 old audit log files are retained
A basic policy is provided at /etc/kubernetes/logpolicy/sample-policy.yaml. It only specifies what not to log. The base policy is located on the cluster's master node.
Edit and extend the basic policy to log:
- namespaces changes at RequestResponse level
- the request body of pods changes in the namespace front-apps
- configMap and secret changes in all namespaces at the Metadata level
- Also, add a catch-all rule to log all other requests at the Metadata level.
Don't forget to apply the modified policy.
Policy file: /etc/kubernetes/logpolicy/sample-policy.yaml
在 cluster 中启用审计日志。为此,请启用日志后端,并确保:
- 日志存储在
/var/log/kubernetes/audit-logs.txt
- 日志文件能保留
10
天 - 最多保留
2
个旧审计日志文件
/etc/kubernetes/logpolicy/sample-policy.yaml 提供了基本策略。它仅指定不记录的内容。
注意:基本策略位于 cluster 的 master 节点上。
编辑和扩展基本策略以记录:
- RequestResponse 级别的 cronjobs 更改
- namespace front-apps 中 deployment 更改的请求体
- Metadata 级别的所有 namespace 中的 ConfigMap 和 Secret 的更改
- 此外,添加一个全方位的规则以在 Metadata 级别记录所有其他请求。
注意:不要忘记应用修改后的策略
Solution
Search the docs for audit (Auditing).
https://kubernetes.io/zh-cn/docs/tasks/debug/debug-cluster/audit/
```bash
### create /var/log/kubernetes/ if it does not exist
```
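The listing is truncated above; a sketch of the appended policy rules and the apiserver flags, following the English wording of the task. The existing "do not log" rules from the sample policy stay in place above the appended rules.

```yaml
# /etc/kubernetes/logpolicy/sample-policy.yaml (rules appended after the existing ones)
  - level: RequestResponse
    resources:
    - group: ""
      resources: ["namespaces"]
  - level: Request                   # request body of pods changes
    resources:
    - group: ""
      resources: ["pods"]
    namespaces: ["front-apps"]
  - level: Metadata
    resources:
    - group: ""
      resources: ["configmaps", "secrets"]
  - level: Metadata                  # catch-all rule
```
```bash
# /etc/kubernetes/manifests/kube-apiserver.yaml - enable the log backend
#   --audit-policy-file=/etc/kubernetes/logpolicy/sample-policy.yaml
#   --audit-log-path=/var/log/kubernetes/audit-logs.txt
#   --audit-log-maxage=10
#   --audit-log-maxbackup=2
# also mount /etc/kubernetes/logpolicy/ and /var/log/kubernetes/ into the apiserver Pod

# wait for the apiserver static Pod to restart, then verify
tail /var/log/kubernetes/audit-logs.txt
```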
Question 11 - Secret
Task
Retrieve the content of the existing secret named db1-test in the istio-system namespace.
Store the username field in a file named /home/candidate/user.txt, and the password field in a file named /home/candidate/pass.txt.
You must create both files; they don't exist yet.
Do not use/modify the created files in the following steps; create new temporary files if needed.
Create a new secret named db2-test in the istio-system namespace, with the following content:
```
username : production-instance
```
Finally, create a new Pod that has access to the secret db2-test via a volume:
- Pod name: secret-pod
- Namespace: istio-system
- Container name: dev-container
- Image: nginx
- Volume name: secret-volume
- Mount path: /etc/secret
在 namespace istio-system
中获取名为 db1-test
的现有 secret 的内容.
将 username 字段存储在名为 /cks/sec/user.txt
的文件中,并将 password 字段存储在名为 /cks/sec/pass.txt
的文件中。
注意:你必须创建以上两个文件,他们还不存在。
注意:不要在以下步骤中使用/修改先前创建的文件,如果需要,可以创建新的临时文件。
在 istio-system
namespace 中创建一个名为 db2-test
的新 secret,内容如下:
1 | username : production-instance |
最后,创建一个新的 Pod,它可以通过卷访问 secret db2-test
:
- Pod 名称
secret-pod
- Namespace
istio-system
- 容器名
dev-container
- 镜像
nginx
- 卷名
secret-volume
- 挂载路径
/etc/secret
Solution
Search the docs for secret ("Managing Secrets using kubectl").
https://kubernetes.io/zh-cn/docs/tasks/configmap-secret/managing-secret-using-kubectl/
Search the docs for secret (Secrets).
https://kubernetes.io/zh-cn/docs/concepts/configuration/secret/
```bash
### retrieve the existing secret and store the username/password fields in the required files
```
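The listing is truncated above; a sketch of the steps. The password value for db2-test is whatever the exam text specifies, shown here as a placeholder.

```bash
# 1) read and decode the existing secret into the required files
kubectl -n istio-system get secret db1-test -o jsonpath='{.data.username}' | base64 -d > /home/candidate/user.txt
kubectl -n istio-system get secret db1-test -o jsonpath='{.data.password}' | base64 -d > /home/candidate/pass.txt

# 2) create the new secret
kubectl -n istio-system create secret generic db2-test \
  --from-literal=username=production-instance \
  --from-literal=password='<password-from-task>'
```
```yaml
# 3) Pod consuming the secret through a volume
apiVersion: v1
kind: Pod
metadata:
  name: secret-pod
  namespace: istio-system
spec:
  containers:
  - name: dev-container
    image: nginx
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret
  volumes:
  - name: secret-volume
    secret:
      secretName: db2-test
```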
Question 12 - Dockerfile and Deployment security hardening
Task
- Analyze and edit the given Dockerfile /cks/docker/Dockerfile (based on the ubuntu:16.04 image), fixing two instructions present in the file that are prominent security/best-practice issues.
- Analyze and edit the given manifest file /cks/docker/deployment.yaml, fixing two fields present in the file that are prominent security/best-practice issues.
Don't add or remove configuration settings; only modify the existing configuration settings, so that two configuration settings each are no longer security/best-practice concerns.
Should you need an unprivileged user for any of the tasks, use user nobody with user id 65535.
- 分析和编辑给定的 Dockerfile
/cks/docker/Dockerfile
(基于 ubuntu:16.04 镜像),并修复在文件中拥有的突出的安全/最佳实践问题的两个指令。- 分析和编辑给定的清单文件
/cks/docker/deployment.yaml
,并修复在文件中拥有突出的安全/最佳实践问题的两个字段。注意:请勿添加或删除配置设置;只需修改现有的配置设置让以上两个配置设置都不再有安全/最佳实践问题。
注意:如果您需要非特权用户来执行任何项目,请使用用户 ID 65535 的用户 nobody 。
Note: the Dockerfile and deployment.yaml for this question only need to be modified; there is no need to build or deploy anything.
Solution
```bash
### fix the two security/best-practice issues in the Dockerfile
```
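The diff is truncated above, and the exact offending lines depend on the files given in the exam. A sketch of the two kinds of fixes commonly reported for this task:

```dockerfile
# /cks/docker/Dockerfile - typical fixes
FROM ubuntu:16.04    # pin the base image stated in the task instead of "latest"
# ...
USER nobody          # run as the unprivileged user (uid 65535) instead of USER root
```
```yaml
# /cks/docker/deployment.yaml - typical fixes in the container's securityContext
securityContext:
  privileged: false              # was: true
  readOnlyRootFilesystem: true   # was: false
  runAsUser: 65535
```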
Question 13 - Admission controllers (ImagePolicyWebhook)
Context
A container image scanner is set up on the cluster, but it’s not yet fully integrated into the cluster’s configuration. When complete, the container image scanner shall scan for and reject the use of vulnerable images.
cluster 上设置了容器镜像扫描器,但尚未完全集成到 cluster 的配置中。完成后,容器镜像扫描器应扫描并拒绝易受攻击的镜像的使用。
Task
You have to complete the entire task on the cluster's master node, where all services and files have been prepared and placed.
Given an incomplete configuration in directory /etc/kubernetes/epconfig and a functional container image scanner with HTTPS endpoint https://acme.local:8082/image_policy:
- Enable the necessary plugins to create an image policy
- Validate the control configuration and change it to an implicit deny
- Edit the configuration to point to the provided HTTPS endpoint correctly
- Finally, test if the configuration is working by trying to deploy the vulnerable resource /cks/1/web1.yaml
You can find the container image scanner's log file at /var/log/imagepolicy/acme.log
注意:你必须在 cluster 的 master 节点上完成整个考题,所有服务和文件都已被准备好并放置在该节点上。 给定一个目录 /etc/kubernetes/epconfig
中不完整的配置以及具有 HTTPS 端点 https://acme.local:8082/image_policy
的功能性容器镜像扫描器:
- 启用必要的插件来创建镜像策略
- 校验控制配置并将其更改为隐式拒绝(implicit deny)
- 编辑配置以正确指向提供的 HTTPS 端点
- 最后,通过尝试部署易受攻击的资源
/cks/img/web1.yaml
来测试配置是否有效。
你可以在 /var/log/imagepolicy/roadrunner.log
找到容器镜像扫描仪的日志文件。
Solution
Search the docs for imagepolicywebhook ("Using Admission Controllers"), then search the page for the string "imagepolicywebhook".
https://kubernetes.io/zh-cn/docs/reference/access-authn-authz/admission-controllers/
```bash
# 0. on the master node
```
```json
{
```
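The listings are truncated above; a sketch of the three pieces follows. The file names inside /etc/kubernetes/epconfig (admission_configuration.yaml, kubeconfig.yaml, the CA/cert path) are assumptions; use whatever already exists in that directory.

```yaml
# /etc/kubernetes/epconfig/admission_configuration.yaml - switch to implicit deny
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: ImagePolicyWebhook
  configuration:
    imagePolicy:
      kubeConfigFile: /etc/kubernetes/epconfig/kubeconfig.yaml
      allowTTL: 50
      denyTTL: 50
      retryBackoff: 500
      defaultAllow: false      # deny when the webhook backend is unreachable
```
```yaml
# /etc/kubernetes/epconfig/kubeconfig.yaml - point at the scanner endpoint
clusters:
- name: image-checker
  cluster:
    certificate-authority: /etc/kubernetes/epconfig/webhook.crt   # assumed filename
    server: https://acme.local:8082/image_policy
```
```bash
# /etc/kubernetes/manifests/kube-apiserver.yaml
#   --enable-admission-plugins=NodeRestriction,ImagePolicyWebhook
#   --admission-control-config-file=/etc/kubernetes/epconfig/admission_configuration.yaml
# wait for the apiserver to restart, then test:
kubectl apply -f /cks/1/web1.yaml     # should be rejected by the webhook
tail /var/log/imagepolicy/acme.log    # log path as given in the task
```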
Question 14 - Delete Pods that are not stateless or not immutable
Context: it is best practice to design containers to be stateless and immutable.
Task
Inspect Pods running in namespace production and delete any Pod that is either not stateless or not immutable.
Use the following strict interpretation of stateless and immutable:
- A Pod being able to store data inside containers must be treated as not stateless. You don't have to worry about whether data is actually stored inside containers already or not.
- A Pod being configured to be privileged in any way must be treated as potentially not stateless and not immutable.
检查在 namespace production 中运行的 Pod,并删除任何非无状态或非不可变的 Pod。
使用以下对无状态和不可变的严格解释:
- 能够在容器内存储数据的 Pod 的容器必须被视为非无状态的。
- 被配置为任何形式的特权 Pod 必须被视为可能是非无状态和非不可变的。
注意:你不必担心数据是否实际上已经存储在容器中。
```bash
### check the Pods in Running state in namespace dev
```
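The listing is truncated above; a sketch of the usual checks. Inspect each flagged Pod's full spec before deleting; the projected ServiceAccount token volume alone does not make a Pod stateful.

```bash
# privileged containers
kubectl -n production get pods \
  -o jsonpath='{range .items[*]}{.metadata.name}{" privileged="}{.spec.containers[*].securityContext.privileged}{"\n"}{end}'

# Pods that can store data inside containers (volumes / volumeMounts are the usual hint)
kubectl -n production get pods \
  -o jsonpath='{range .items[*]}{.metadata.name}{" volumes="}{.spec.volumes[*].name}{"\n"}{end}'

# delete every Pod that matched either check
kubectl -n production delete pod <pod-name>
```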
Question 15 - gVisor / RuntimeClass
Context
This cluster uses containerd as CRI runtime.
Containerd's default runtime handler is runc. Containerd has been prepared to support an additional runtime handler, runsc (gVisor).
该 cluster 使用 containerd 作为 CRI 运行时。
containerd 的默认运行时处理程序是runc
。 containerd 已准备好支持额外的运行时处理程序runsc
(gVisor)。
Task
Create a RuntimeClass named untrusted using the prepared runtime handler named runsc.
Update all Pods in the namespace server to run on gVisor, unless they are already running on a non-default runtime handler.
You can find a skeleton manifest file at /cks/13/rc.yaml
使用名为
runsc
的现有运行时处理程序,创建一个名为untrusted
的 RuntimeClass。
更新 namespaceserver
中的所有 Pod 以在 gVisor 上运行。
您可以在/cks/gVisor/rc.yaml
中找到一个模版清单
Solution
Search the docs for runtimeclass (Runtime Class).
https://kubernetes.io/zh-cn/docs/concepts/containers/runtime-class/
```bash
### edit the template manifest file
```
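The listing is truncated above; a sketch. The RuntimeClass itself comes straight from the task; updating the workloads means adding runtimeClassName to the Pod template of each Deployment (or bare Pod) in the namespace.

```yaml
# /cks/13/rc.yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: untrusted
handler: runsc
```
```bash
kubectl apply -f /cks/13/rc.yaml

# add the runtime class to every workload in namespace server
kubectl -n server get deploy,pod
kubectl -n server edit deploy <deployment-name>
#   spec.template.spec.runtimeClassName: untrusted
kubectl -n server get pods        # Pods are recreated on the gVisor handler
```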
Question 16 - Enable API server authentication and authorization
Context
For testing purposes, the Kubernetes API server of the kubeadm-created cluster was temporarily configured to allow unauthenticated and unauthorized access, granting anonymous users cluster-admin privileges.
Task
Reconfigure the cluster's Kubernetes API server so that only authenticated and authorized REST requests are allowed.
Use the authorization modes Node and RBAC, and the admission controller NodeRestriction.
Clean up by deleting the ClusterRoleBinding for the user system:anonymous.
Note: all kubectl configuration contexts/files are also configured to use unauthenticated and unauthorized access. You do not have to change them, but be aware that kubectl will stop working once the cluster has been hardened. You can use the cluster's original kubectl configuration file /etc/kubernetes/admin.conf, located on the cluster's master node, to make sure that authenticated and authorized requests are still allowed.
```bash
k get nodes
```
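The listing is truncated above; a sketch of the usual steps. The ClusterRoleBinding name system:anonymous comes from the task, and admin.conf is the kubeadm-generated admin kubeconfig mentioned in the note.

```bash
ssh <master-node>

vim /etc/kubernetes/manifests/kube-apiserver.yaml
#   --authorization-mode=Node,RBAC
#   --enable-admission-plugins=NodeRestriction
# (leave the client CA / certificate flags generated by kubeadm in place)

# once anonymous access is gone, use the original admin kubeconfig
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes                    # authenticated, authorized requests still work

# clean-up: remove the binding that granted anonymous users cluster-admin
kubectl delete clusterrolebinding system:anonymous
```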
References
- CKS real exam question write-ups
- Other CKS resources